An Efficient Proximal Gradient Method for General Structured Sparse Learning

Authors

  • Xi Chen
  • Qihang Lin
  • Seyoung Kim
  • Jaime G. Carbonell
  • Eric P. Xing
Abstract

We study the problem of learning high-dimensional regression models regularized by a structured-sparsity-inducing penalty that encodes prior structural information on either the input or output side. We consider two widely adopted types of such penalties as our motivating examples: 1) the overlapping-group-lasso penalty, based on the ℓ1/ℓ2 mixed norm, and 2) the graph-guided fusion penalty. For both types of penalties, their non-separability has made developing an efficient optimization method a challenging problem. In this paper, we propose a general optimization framework, called the proximal gradient method, which can solve structured sparse learning problems with a smooth convex loss and a wide spectrum of non-smooth and non-separable structured-sparsity-inducing penalties, including the overlapping-group-lasso and graph-guided fusion penalties. Our method exploits the structure of such penalties, decouples the non-separable penalty function via its dual norm, introduces a smooth approximation, and solves this approximate objective. It achieves a convergence rate significantly faster than that of the standard first-order approach, the subgradient method, and is much more scalable than the most widely used alternative, the interior-point method applied to second-order cone programming and quadratic programming formulations. The efficiency and scalability of our method are demonstrated on both simulated and real genetic datasets.
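The core idea described above can be sketched in a few lines. The following is a minimal illustration, not the paper's exact algorithm: it applies Nesterov-style smoothing to an overlapping-group-lasso penalty Ω(β) = λ Σ_g ‖β_g‖₂ by writing Ω via its dual norm, subtracting a μ/2‖α‖² term, and then running plain gradient descent on the smoothed objective (the paper uses an accelerated, FISTA-style variant). All function names and parameter choices here are illustrative assumptions.

```python
import numpy as np

def smoothed_penalty_grad(beta, groups, lam, mu):
    """Gradient of the Nesterov-smoothed overlapping-group-lasso penalty.

    Omega(beta) = lam * sum_g ||beta_g||_2 is non-smooth. Via the dual norm,
    Omega_mu(beta) = max_{||a_g||_2 <= 1} sum_g lam * a_g^T beta_g - (mu/2)*||a||^2.
    The maximizer is a_g* = proj_{unit ball}(lam * beta_g / mu), and by
    Danskin's theorem the gradient assembles lam * a_g* group by group.
    """
    grad = np.zeros_like(beta)
    for g in groups:
        a = lam * beta[g] / mu
        norm = np.linalg.norm(a)
        if norm > 1.0:
            a = a / norm          # project onto the l2 unit ball
        grad[g] += lam * a
    return grad

def smoothing_proximal_gradient(X, y, groups, lam=0.1, mu=1e-2, n_iter=1000):
    """Minimize 0.5*||X beta - y||^2 + Omega_mu(beta) by plain gradient
    descent on the smoothed objective. A sketch of the smoothing idea only;
    hyperparameters (lam, mu, n_iter) are illustrative, not from the paper.
    """
    n, p = X.shape
    beta = np.zeros(p)
    # Lipschitz constant of the smoothed gradient: ||X||_2^2 + lam^2 * k / mu,
    # where k bounds how many groups any single coordinate belongs to.
    counts = np.zeros(p)
    for g in groups:
        counts[g] += 1
    k = counts.max()
    L = np.linalg.norm(X, 2) ** 2 + lam ** 2 * k / mu
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) + smoothed_penalty_grad(beta, groups, lam, mu)
        beta = beta - grad / L
    return beta
```

Because the smoothed penalty is differentiable with a μ-dependent Lipschitz gradient, any first-order method applies; shrinking μ trades approximation accuracy against the step size 1/L.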


Similar Articles

An Efficient Proximal Gradient Method for General Structured Sparse Learning

We study the problem of learning regression models regularized by a structured-sparsity-inducing penalty which encodes prior structural information. We consider the two most widely adopted structures as motivating examples: (1) group structure (possibly overlapping), which is encoded via the ℓ1/ℓ2 mixed-norm penalty; (2) graph structure, which is encoded in the graph-guided fusion penalty. For both structures...

Full text

Smoothing Proximal Gradient Method for General Structured Sparse Learning

We study the problem of learning high dimensional regression models regularized by a structured-sparsity-inducing penalty that encodes prior structural information on either input or output sides. We consider two widely adopted types of such penalties as our motivating examples: 1) overlapping group lasso penalty, based on the ℓ1/ℓ2 mixed-norm penalty, and 2) graph-guided fusion penalty. For bo...

Full text

A Smoothing Proximal Gradient Method for General Structured Sparse Regression

We study the problem of estimating high dimensional regression models regularized by a structured sparsity-inducing penalty that encodes prior structural information on either the input or output variables. We consider two widely adopted types of penalties of this kind as motivating examples: 1) the general overlapping-group-lasso penalty, generalized from the group-lasso penalty; and 2) the gr...

Full text

SMOOTHING PROXIMAL GRADIENT METHOD FOR GENERAL STRUCTURED SPARSE REGRESSION

We study the problem of estimating high dimensional regression models regularized by a structured sparsity-inducing penalty that encodes prior structural information on either the input or output variables. We consider two widely adopted types of penalties of this kind as motivating examples: 1) the general overlapping-group-lasso penalty, generalized from the group-lasso penalty; and 2) the gr...

Full text

Smoothing proximal gradient method for general structured sparse regression

We study the problem of estimating high dimensional regression models regularized by a structured-sparsity-inducing penalty that encodes prior structural information on either input or output sides. We consider two widely adopted types of such penalties as our motivating examples: 1) overlapping-group-lasso penalty, based on the ℓ1/ℓ2 mixed-norm penalty, and 2) graph-guided fusion penalty. For ...

Full text


Journal:

Volume   Issue

Pages  -

Publication date: 2010